    Multimodality and Multiresolution Image Fusion

    Standard multiresolution image fusion of multimodal images may yield an output image with artifacts due to the occurrence of opposite contrast in the input images. Equal but opposite contrast leads to noisy patches that are unstable with respect to slight changes in the input images; unequal and opposite contrast leads to uncertainty about how to interpret the modality of the result. In this paper a biased fusion is proposed to remedy this, where the bias is towards one image, the so-called iconic image, in a preferred spectrum. A nonlinear fusion rule is proposed to prevent the fused image from reversing the local contrasts seen in the iconic image. The rule involves saliency and a local match measure. The method is demonstrated on artificial and real-life examples.
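
    As a rough illustration of such a biased, contrast-preserving rule, the sketch below is a minimal single-scale stand-in of our own (not the paper's multiresolution formulation); the function name, the alpha bias weight, and the Gaussian high-pass decomposition are all assumptions.

```python
# Minimal single-scale sketch of a biased fusion rule (hypothetical code,
# not the paper's multiresolution method).  Details are taken from the
# iconic image whenever the two inputs disagree in contrast polarity,
# so the fused image never reverses the iconic image's local contrast.
import numpy as np
from scipy.ndimage import gaussian_filter

def biased_fuse(iconic, other, sigma=2.0, alpha=0.7):
    """Fuse two registered grayscale images, biased toward `iconic`."""
    iconic = np.asarray(iconic, dtype=float)
    other = np.asarray(other, dtype=float)
    base_i, base_o = gaussian_filter(iconic, sigma), gaussian_filter(other, sigma)
    det_i, det_o = iconic - base_i, other - base_o        # high-pass details
    match = np.sign(det_i * det_o)                        # +1 agree, -1 opposite
    # Saliency comparison, biased by alpha in favor of the iconic image.
    pick_iconic = alpha * np.abs(det_i) >= (1 - alpha) * np.abs(det_o)
    det_f = np.where((match < 0) | pick_iconic, det_i, det_o)
    return alpha * base_i + (1 - alpha) * base_o + det_f  # biased base + details
```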

    Visible and Infrared Image Registration Employing Line-Based Geometric Analysis

    We present a new method to register a pair of visible (ViS) and infrared (IR) images. Unlike most existing systems, which align interest points of the two images, we align lines derived from edge pixels, because the interest points extracted from the two images are not always identical, whereas most major edges detected in one image do appear in the other. To solve the feature matching problem, we emphasize the geometric structure alignment of features (lines) instead of descriptor-based individual feature matching. This is because the image properties and patch statistics of corresponding features can be quite different, especially when comparing a ViS image with a long-wave IR (thermal) image, whereas the spatial layout of the features in both images remains consistent. The last step of our algorithm computes the image transform matrix, given a minimum of 4 pairs of line correspondences. A comparative evaluation demonstrates the higher accuracy attained by our method compared to state-of-the-art approaches.
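
    The final estimation step admits a compact direct-linear-transform (DLT) formulation: under a point homography x' = Hx, lines map as l = H^T l', so H^T can be solved for exactly like a point homography with the lines' roles swapped. The sketch below is our illustration under that standard assumption, not the authors' code.

```python
# Sketch: homography from line correspondences via DLT (illustrative,
# not the paper's implementation).  If points map as x_dst ~ H @ x_src,
# then lines map as l_src ~ H.T @ l_dst, so M = H.T is estimated like a
# point homography with the two images' lines swapped.
import numpy as np

def homography_from_lines(lines_src, lines_dst):
    """lines_*: (N, 3) homogeneous lines a*x + b*y + c = 0, with N >= 4."""
    rows = []
    for l, lp in zip(lines_src, lines_dst):        # solve l ~ M @ lp
        l, lp = l / np.linalg.norm(l), lp / np.linalg.norm(lp)
        rows.append(np.concatenate([np.zeros(3), -l[2] * lp,  l[1] * lp]))
        rows.append(np.concatenate([l[2] * lp, np.zeros(3), -l[0] * lp]))
    _, _, vt = np.linalg.svd(np.asarray(rows))     # least-squares null space
    H = vt[-1].reshape(3, 3).T                     # H = M.T
    return H / H[2, 2]                             # fix scale (assumes H[2,2] != 0)
```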

    Employing a RGB-D Sensor for Real-Time Tracking of Humans across Multiple Re-Entries in a Smart Environment

    The term smart environment refers to physical spaces equipped with sensors feeding into adaptive algorithms, enabling the environment to become sensitive and responsive to the presence and needs of its occupants. People with special needs, such as the elderly or disabled, stand to benefit most from such environments, as they offer sophisticated assistive functionalities supporting independent living and improved safety. In a smart environment, the key issue is to sense the location and identity of its users. In this paper, we tackle the problems of detecting and tracking humans in a realistic home environment by exploiting the complementary nature of (synchronized) color and depth images produced by a low-cost consumer-level RGB-D camera. Our system selectively feeds the complementary data emanating from the two vision sensors to different algorithmic modules, which together implement three sequential components: (1) object labeling based on depth-data clustering, (2) human re-entry identification based on comparing visual signatures extracted from the color (RGB) information, and (3) human tracking based on the fusion of both depth and RGB data. Experimental results show that this division of labor improves the system's efficiency and classification performance.
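
    A schematic of this division of labor might look as follows; this is a paraphrase of the pipeline structure with simple stand-in choices (connected-component depth clustering, RGB-histogram signatures), and none of the names or thresholds come from the paper.

```python
# Skeleton of the three-stage pipeline (our paraphrase with stand-in
# algorithms; thresholds and signature choices are assumptions).
import numpy as np
from scipy import ndimage

def label_objects(depth, max_range=4.0, min_pixels=500):
    """Stage 1: cluster valid nearby depth pixels into candidate blobs."""
    labels, n = ndimage.label((depth > 0) & (depth < max_range))
    sizes = np.bincount(labels.ravel())
    return labels, [i for i in range(1, n + 1) if sizes[i] >= min_pixels]

def color_signature(rgb, mask, bins=8):
    """Stage 2: visual signature = normalized 3-D RGB histogram of a blob."""
    hist, _ = np.histogramdd(rgb[mask], bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist.ravel() / max(hist.sum(), 1)

def identify_reentry(signature, gallery, threshold=0.6):
    """Stage 2 (cont.): histogram intersection against known identities."""
    scores = [np.minimum(signature, g).sum() for g in gallery]
    if scores and max(scores) > threshold:
        return int(np.argmax(scores))      # a returning occupant
    gallery.append(signature)
    return len(gallery) - 1                # a new identity

# Stage 3 (not shown) would associate blobs across frames, fusing depth
# centroids with the color signatures to keep tracks stable.
```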

    Fast saliency-aware multi-modality image fusion

    This paper proposes a saliency-aware fusion algorithm for integrating infrared (IR) and visible light (ViS) images (or videos) with the aim of enhancing the visualization of the latter. Our algorithm involves saliency detection followed by a biased fusion. The goal of the saliency detection is to generate a saliency map for the IR image, highlighting the co-occurrence of high brightness values ("hot spots") and motion. Markov Random Fields (MRFs) are used to combine these two sources of information. The subsequent fusion step biases the end result in favor of the ViS image, except when a region shows clear IR saliency, in which case the IR image gains (local) dominance. By doing so, the fused image succeeds in depicting both the salient foreground objects gleaned from the IR image and the easily recognizable background supplied by the ViS image. An evaluation of the proposed saliency detection method indicates improvements in detection accuracy when compared to state-of-the-art alternatives. Moreover, both objective and subjective assessments reveal the effectiveness of the proposed fusion algorithm in terms of visual context enhancement.
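
    A drastically simplified sketch of the two steps is given below; for illustration, a plain pixel-wise product of the brightness and motion cues stands in for the paper's MRF combination, and all parameter values are assumptions.

```python
# Simplified sketch of saliency-then-biased-fusion (hypothetical code; a
# pixel-wise product stands in for the paper's MRF combination step).
import numpy as np

def ir_saliency(ir_frame, ir_prev, hot_thresh=0.7):
    """Saliency = co-occurrence of hot spots and motion (both in [0, 1])."""
    ir = ir_frame.astype(float) / 255.0
    hot = np.clip((ir - hot_thresh) / (1.0 - hot_thresh), 0.0, 1.0)
    motion = np.abs(ir - ir_prev.astype(float) / 255.0)   # frame difference
    return hot * motion / max(motion.max(), 1e-6)

def biased_fusion(vis, ir, saliency):
    """ViS dominates everywhere except where IR saliency is high."""
    return (1.0 - saliency) * vis + saliency * ir         # grayscale inputs
```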

    High-fidelity inhomogeneous ground clutter simulation of airborne phased array PD radar aided by digital elevation model and digital land classification data

    This paper presents a high-fidelity inhomogeneous ground clutter simulation method for airborne phased array Pulse Doppler (PD) radar, aided by a digital elevation model (DEM) and digital land classification data (DLCD). The method starts by extracting the basic geographic information of the Earth-surface scattering points from the DEM data, then reads each scattering point's surface classification code from the DLCD. After determining the landform types, different backscattering coefficient models are selected to compute the backscattering coefficient of each scattering point. Finally, the high-fidelity inhomogeneous ground clutter simulation of airborne phased array PD radar is realized based on the Ward model. The simulation results show that the landform classifications obtained by the proposed method are richer, and that the ground clutter simulated with the class-specific backscattering coefficient models is more realistic and effective.
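
    The per-point selection step can be pictured with a toy stand-in: hypothetical landform classes, a generic constant-gamma backscatter law, and made-up radar constants; the paper's actual coefficient models and the Ward model are not reproduced here.

```python
# Toy sketch of per-landform backscatter selection (all values assumed;
# sigma0 = gamma * sin(grazing angle) is one common constant-gamma law).
import numpy as np

GAMMA_DB = {"water": -28.0, "farmland": -15.0, "forest": -10.0, "urban": -5.0}

def sigma0(landform, grazing_rad):
    """Backscattering coefficient chosen by the point's landform class."""
    gamma = 10.0 ** (GAMMA_DB[landform] / 10.0)           # dB -> linear
    return gamma * np.sin(grazing_rad)

def patch_clutter_power(pt, pk=1e3, gain=1e4, wavelength=0.03):
    """Radar-equation power from one DEM scattering patch.
    pt: {'landform': str, 'grazing': rad, 'range': m, 'area': m^2}"""
    rcs = sigma0(pt["landform"], pt["grazing"]) * pt["area"]
    return pk * gain**2 * wavelength**2 * rcs / ((4 * np.pi) ** 3 * pt["range"] ** 4)
```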
